Perceptron: Learning, Generalization, Model Selection, Fault Tolerance, and Role in the Deep Learning Era
Authors
Abstract
The single-layer perceptron, introduced by Rosenblatt in 1958, is one of the earliest and simplest neural network models. However, it is incapable of classifying linearly inseparable patterns. A new era of research started in 1986, when the backpropagation (BP) algorithm was rediscovered for training the multilayer perceptron (MLP) model. An MLP with a large number of hidden nodes can function as a universal approximator. To date, the MLP remains the most fundamental and important neural network model, and it is also among the most investigated. Even in this AI or deep learning era, it is still among the few most widely used models, and numerous new results have been obtained in the past three decades. This survey paper gives a comprehensive, state-of-the-art introduction to the perceptron model, with emphasis on learning, generalization, model selection and fault tolerance. The model's role in the deep learning era is also described. The paper provides a concluding survey that covers all the major achievements of the past seven decades and serves as a tutorial for perceptron learning.
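The limitation the abstract highlights can be seen directly in code. The sketch below (a minimal illustration; the function name, learning rate, and epoch count are arbitrary choices, not from the paper) implements Rosenblatt's perceptron learning rule and shows that it converges on the linearly separable AND function but can never reach zero error on XOR:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Rosenblatt's rule: w += lr * (target - prediction) * x.
    Inputs are augmented with a constant bias term; labels are 0/1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0      # hard threshold unit
            w += lr * (target - pred) * xi     # update only on mistakes
            errors += int(pred != target)
        if errors == 0:  # converged: every point classified correctly
            break
    return w, errors

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])  # linearly separable
xor_y = np.array([0, 1, 1, 0])  # linearly inseparable

w_and, err_and = train_perceptron(X, and_y)
w_xor, err_xor = train_perceptron(X, xor_y)
print(err_and)  # 0: AND is learned exactly
print(err_xor)  # nonzero: no single hyperplane separates XOR
```

This is exactly why the 1986 rediscovery of BP mattered: an MLP with one hidden layer can represent XOR, while the single-layer model above cannot, no matter how long it trains.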
Similar resources
The Role of Vocabulary Learning Strategies on Vocabulary Retention and on Language Proficiency in Iranian EFL Students
In recent years, second language teaching has sought better methods of achieving the goals of teachers and students. For teachers, this has led to research on linguistic, conversational, and interactional structure. For students, it has led to studies of learners' attitudes toward learning inside and outside the classroom, as well as of the various kinds of mental processing they use. The aim of this research is to find methods that...
Generalization in Deep Learning
With a direct analysis of neural networks, this paper presents a mathematically tight generalization theory to partially address an open problem regarding the generalization of deep learning. Unlike previous bound-based theory, our main theory is quantitatively as tight as possible for every dataset individually, while producing qualitative insights competitively. Our results give insight into ...
Exploring Generalization in Deep Learning
With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures e...
Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance, Generalization and Learning Trajectory
We analyse the effects of analog noise on the synaptic arithmetic during MultiLayer Perceptron training, by expanding the cost function to include noise-mediated penalty terms. Predictions are made in the light of these calculations which suggest that fault tolerance, generalisation ability and learning trajectory should be improved by such noise-injection. Extensive simulation experiments on t...
The Relationship between Using Language Learning Strategies, Learners' Optimism, Educational Status, Duration of Learning and Demotivation
With the growth of more humanistic approaches towards teaching foreign languages, more emphasis has been put on learners' feelings, emotions and individual differences. One of the issues in teaching and learning English as a foreign language is demotivation. The purpose of this study was to investigate the relationship between the components of language learning strategies, optimism, duration o...
Journal
Journal title: Mathematics
Year: 2022
ISSN: 2227-7390
DOI: https://doi.org/10.3390/math10244730